
Conversation

@shariqriazz (Contributor) commented Sep 11, 2025

  • Add wandb provider implementation with OpenAI-compatible API
  • Add wandb models with full context window support (maxTokens = contextWindow). EDIT: as requested, this has been changed to the documented max tokens from each provider (although the wandb provider itself allows output up to the context window minus the input tokens); see the model-entry sketch after this list.
  • Add comprehensive error handling with localized messages
  • Add thinking token support for reasoning models
  • Add wandb API key settings to all language locales
  • Add wandb error translations to all supported languages
  • Support for multimodal models and reasoning effort
  • Proper token usage tracking and cost calculation
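
For illustration, a W&B model entry of the kind described above might look like the following sketch. The field names follow a generic ModelInfo-style shape and the values are placeholders, not the actual entries from this PR.

```typescript
// Hypothetical shape of a W&B model definition entry (illustrative only).
interface WandbModelInfo {
	contextWindow: number // total tokens visible to the model (input + output)
	maxTokens: number // documented per-model output cap (no longer contextWindow)
	supportsImages: boolean // multimodal support
	supportsReasoningEffort?: boolean
	inputPrice: number // USD per 1M input tokens, used for cost calculation
	outputPrice: number // USD per 1M output tokens
}

const wandbModels: Record<string, WandbModelInfo> = {
	// Placeholder model id and pricing (not real values from the PR).
	"example-org/example-model": {
		contextWindow: 128_000,
		maxTokens: 8_192, // documented value rather than the earlier maxTokens = contextWindow
		supportsImages: false,
		supportsReasoningEffort: true,
		inputPrice: 0.5,
		outputPrice: 1.5,
	},
}
```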

Important

Add Weights & Biases as a provider with full API support, error handling, and validation updates.

  • Provider Addition:
    • Add wandb provider with OpenAI-compatible API in wandb.ts (see the handler sketch after this list).
    • Support for multimodal models and reasoning effort.
    • Implement token usage tracking and cost calculation.
  • API Key Management:
    • Add wandbApiKey to global-settings.ts and provider-settings.ts.
    • Update ApiOptions.tsx and Wandb.tsx for API key input and management.
  • Error Handling:
    • Add error translations for wandb in all supported languages in locales.
    • Implement comprehensive error handling in wandb.ts.
  • Validation:
    • Update validate.ts to include wandb API key validation.
  • Testing:
    • Add tests for WandbHandler in wandb.spec.ts.
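
As a rough sketch of what such an OpenAI-compatible handler could look like (the class, option, and chunk names here are assumptions for illustration, not the PR's actual wandb.ts, and the base URL is an assumption as well):

```typescript
import OpenAI from "openai"

// Assumed option shape; the real handler is configured via provider settings.
interface WandbHandlerOptions {
	wandbApiKey: string
	modelId: string
	modelMaxTokens: number
}

export class WandbHandler {
	private client: OpenAI

	constructor(private options: WandbHandlerOptions) {
		this.client = new OpenAI({
			// Assumed OpenAI-compatible endpoint for W&B Inference.
			baseURL: "https://api.inference.wandb.ai/v1",
			apiKey: options.wandbApiKey,
		})
	}

	// Streams text chunks plus a usage record for token tracking / cost calculation.
	async *createMessage(systemPrompt: string, userText: string) {
		const stream = await this.client.chat.completions.create({
			model: this.options.modelId,
			max_tokens: this.options.modelMaxTokens,
			stream: true,
			stream_options: { include_usage: true },
			messages: [
				{ role: "system", content: systemPrompt },
				{ role: "user", content: userText },
			],
		})

		for await (const chunk of stream) {
			const delta = chunk.choices[0]?.delta?.content
			if (delta) {
				yield { type: "text", text: delta }
			}
			if (chunk.usage) {
				yield {
					type: "usage",
					inputTokens: chunk.usage.prompt_tokens,
					outputTokens: chunk.usage.completion_tokens,
				}
			}
		}
	}
}
```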

This description was created by Ellipsis for 862f0390b070a926fc5899ba7847a729af6db303.

@dosubot dosubot bot added the size:XL (This PR changes 500-999 lines, ignoring generated files) and enhancement (New feature or request) labels Sep 11, 2025
@roomote roomote bot (Contributor) left a comment:

Thank you for your contribution! I've reviewed the changes and have some suggestions for improvement. Overall, the implementation follows existing patterns well and includes comprehensive i18n support. The code is clean and well-structured.

Contributor:

Is the 20% maxTokens calculation intentional for all W&B models? I noticed this differs from other providers like OpenRouter which use varying percentages. Could we document why this specific ratio was chosen, or should it vary per model?
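
For reference, the heuristic being questioned works out to something like the following (numbers are illustrative; the PR was later changed to use documented per-model values):

```typescript
// Illustrative comparison of the two approaches to an output cap.
const contextWindow = 128_000

const derivedMaxTokens = Math.floor(contextWindow * 0.2) // 25_600 with the questioned 20% ratio
const documentedMaxTokens = 8_192 // per-model value taken from the provider's documentation
```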

Contributor:

Consider at least logging these parsing errors for debugging purposes instead of silently ignoring them:
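
The suggested snippet itself is not reproduced in this excerpt; as a hedged sketch of the idea (function and message names are hypothetical), a silently swallowed parse failure could become a logged-and-skipped one:

```typescript
// Hypothetical helper: parse one streamed data line, logging malformed chunks
// instead of ignoring them, while keeping the stream resilient.
function parseStreamChunk(data: string): unknown {
	try {
		return JSON.parse(data)
	} catch (error) {
		console.warn("Wandb: failed to parse streaming chunk, skipping", {
			error: error instanceof Error ? error.message : String(error),
			chunkPreview: data.slice(0, 120),
		})
		return undefined
	}
}
```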

Contributor:

These token estimates seem quite rough (4 chars per token). Could we use a more accurate tokenizer, or at least document why these specific ratios were chosen? This could impact cost calculations significantly.
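
For context, the kind of fallback being discussed is roughly the following (a sketch assuming character-count estimation is only used when the API returns no usage data; prices are placeholders):

```typescript
// Rough fallback: ~4 characters per token for English-like text.
// A real tokenizer (e.g. a tiktoken encoding) would be more accurate for
// code, non-Latin scripts, or numeric-heavy content.
function estimateTokens(text: string): number {
	return Math.ceil(text.length / 4)
}

// Why accuracy matters: cost scales linearly with the token counts,
// so a 25% estimation error shifts the computed cost by the same 25%.
function estimateCostUsd(inputTokens: number, outputTokens: number): number {
	const inputPricePerMillion = 0.5 // placeholder price
	const outputPricePerMillion = 1.5 // placeholder price
	return (
		(inputTokens / 1_000_000) * inputPricePerMillion +
		(outputTokens / 1_000_000) * outputPricePerMillion
	)
}
```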

Contributor:

The test coverage could be improved. Consider adding tests for:

  • Thinking token stripping functionality ()
  • XmlMatcher integration for reasoning models
  • Image content handling in messages
  • Actual streaming response parsing with realistic data

These are critical features that should have test coverage.
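
As a hedged example of the kind of test being asked for (vitest-style; the fake handler below stands in for the real WandbHandler, whose HTTP layer would be mocked instead, and all names are illustrative):

```typescript
import { describe, it, expect } from "vitest"

// Stand-in for the handler under test: yields text chunks and routes
// <think>…</think> content to "reasoning" chunks.
function makeFakeHandler(pieces: string[]) {
	return {
		async *createMessage(_system: string, _user: string) {
			for (const piece of pieces) {
				const match = piece.match(/^<think>([\s\S]*)<\/think>$/)
				if (match) {
					yield { type: "reasoning", text: match[1] ?? "" }
				} else {
					yield { type: "text", text: piece }
				}
			}
		},
	}
}

describe("thinking token handling (sketch)", () => {
	it("separates <think> content from regular text", async () => {
		const handler = makeFakeHandler(["<think>step one</think>", "final answer"])

		const chunks: Array<{ type: string; text: string }> = []
		for await (const chunk of handler.createMessage("system", "hello")) {
			chunks.push(chunk)
		}

		expect(chunks).toContainEqual({ type: "reasoning", text: "step one" })
		expect(chunks).toContainEqual({ type: "text", text: "final answer" })
	})
})
```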

Contributor:

The temperature is clamped to 0-2 range with a comment "W&B typically supports 0-2 range". Should this be configurable per model or documented in the model definitions? Different models might have different optimal ranges.
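
For reference, the clamp in question amounts to something like this (bounds shown as constants; a per-model range would replace them if one were added to the model definitions):

```typescript
// Clamp a requested temperature into the range the provider accepts.
function clampTemperature(value: number, min = 0, max = 2): number {
	return Math.min(Math.max(value, min), max)
}
```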

Contributor:

For consistency, should this also use the localized error format like in ?
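
The location being compared against is elided above; purely as an illustration of the localized format (the translation key and helper are hypothetical, assuming an i18next-style t function):

```typescript
// Hypothetical sketch: build a localized error instead of a hard-coded string.
function localizedWandbError(
	t: (key: string, vars?: Record<string, string>) => string,
	detail: string,
): Error {
	return new Error(t("errors.wandb.completionError", { error: detail }))
}
```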

Contributor:

Consider adding JSDoc comments to the main class and its public methods. This would improve maintainability and help other developers understand the API better.
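
For example, a sketch of what such JSDoc could look like (summary text and parameter names are placeholders, not documentation of the PR's actual methods):

```typescript
/**
 * Handler for the Weights & Biases OpenAI-compatible inference API.
 *
 * Streams chat completions, separates reasoning ("thinking") content from
 * regular output, and reports token usage for cost calculation.
 */
export class WandbHandler {
	/**
	 * Streams a chat completion for the given prompt.
	 *
	 * @param systemPrompt - System instructions prepended to the conversation.
	 * @param userText - The user's message content.
	 * @returns An async generator of text, reasoning, and usage chunks.
	 */
	async *createMessage(systemPrompt: string, userText: string) {
		// Implementation elided in this sketch.
	}
}
```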

@hannesrudolph hannesrudolph added the Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.) label Sep 11, 2025
@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Sep 11, 2025
@hannesrudolph hannesrudolph added the PR - Needs Preliminary Review label and removed the Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.) label Sep 11, 2025
Member:

Is this intentional? I think these models have documented maxTokens.
Can you change these so that they match the actual documented values for these models?

@daniel-lxs daniel-lxs moved this from PR [Needs Prelim Review] to PR [Changes Requested] in Roo Code Roadmap Sep 15, 2025
@daniel-lxs daniel-lxs moved this from PR [Changes Requested] to PR [Needs Prelim Review] in Roo Code Roadmap Sep 15, 2025
@daniel-lxs (Member) commented:

Is the failing unit test legit?

@daniel-lxs daniel-lxs moved this from PR [Needs Prelim Review] to PR [Changes Requested] in Roo Code Roadmap Sep 15, 2025
@shariqriazz (Contributor, Author) commented:

> Is the failing unit test legit?

Unrelated to my changes; I don't know why this keeps happening.

@hannesrudolph (Collaborator) commented:

We are not adding this provider at this time.

@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Sep 23, 2025
@github-project-automation github-project-automation bot moved this from PR [Changes Requested] to Done in Roo Code Roadmap Sep 23, 2025

Labels

  • enhancement (New feature or request)
  • PR - Changes Requested
  • size:XL (This PR changes 500-999 lines, ignoring generated files)
